48 research outputs found

    The refocusing distance of a standard plenoptic photograph

    Get PDF
    IEEE International Conference Paper
    In recent years, the plenoptic camera has attracted increasing interest in the field of computer vision. Its capability of capturing three-dimensional image data is achieved by an array of micro lenses placed in front of a traditional image sensor. The acquired light field data allows for the reconstruction of photographs focused at different depths. Given the plenoptic camera parameters, the metric distance of refocused objects may be retrieved with the aid of geometric ray tracing. Until now, there has been a lack of experimental results using real image data to prove this conceptual solution. With this paper, the first experimental work is presented on the basis of a new ray tracing model that considers more accurate micro image centre positions. To evaluate the developed method, the blur metric of objects in a refocused image stack is measured and compared with the proposed predictions. The results suggest an accurate approximation for distant objects and deviations for objects closer to the camera device.
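    The geometric relation underlying such distance retrieval can be illustrated, in a much simplified form, with the thin-lens equation; the sketch below is a hypothetical illustration only, not the paper's ray tracing model, which additionally accounts for micro image centre positions.

```python
def refocus_distance(f_mm: float, b_mm: float) -> float:
    """Object-side distance a of a refocused plane, from the thin-lens
    equation 1/f = 1/a + 1/b, given main-lens focal length f and an
    image-side refocus plane at distance b (both in millimetres)."""
    if b_mm <= f_mm:
        raise ValueError("image distance must exceed focal length for a real object")
    return 1.0 / (1.0 / f_mm - 1.0 / b_mm)
```

    For the symmetric 1:1 imaging case (b = 2f), the object plane sits at a = 2f as expected.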

    Real-time refocusing using an FPGA-based standard plenoptic camera

    Get PDF
    Plenoptic cameras are receiving increased attention in scientific and commercial applications because they capture the entire structure of light in a scene, enabling optical transforms (such as focusing) to be applied computationally after the fact, rather than once and for all at the time a picture is taken. In many settings, real-time interactive performance is also desired, which in turn requires significant computational power due to the large amount of data required to represent a plenoptic image. Although GPUs have been shown to provide acceptable performance for real-time plenoptic rendering, their cost and power requirements make them prohibitive for embedded uses (such as in-camera). On the other hand, the computation to accomplish plenoptic rendering is well structured, suggesting the use of specialized hardware. Accordingly, this paper presents an array of switch-driven finite impulse response filters, implemented on an FPGA to accomplish high-throughput spatial-domain rendering. The proposed architecture provides a power-efficient rendering hardware design suitable for full-video applications as required in broadcasting or cinematography. A benchmark assessment of the proposed hardware implementation shows that real-time performance can readily be achieved, with a one order of magnitude performance improvement over a GPU implementation and three orders of magnitude performance improvement over a general-purpose CPU implementation.
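    At its core, the spatial-domain rendering that maps well onto FIR filter arrays is a weighted multiply-accumulate over the angular samples of the light field. A minimal NumPy sketch of that structure (array shapes and names are our assumptions, not the paper's hardware design):

```python
import numpy as np

def fir_render(lightfield: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Render one output image as a weighted sum over viewpoints (u, v).

    lightfield has shape (U, V, H, W); weights has shape (U, V).
    The multiply-accumulate across (u, v) is the operation a bank of
    FIR filters performs in hardware; in software it is one tensordot.
    """
    return np.tensordot(weights, lightfield, axes=([0, 1], [0, 1]))
```

    With uniform weights summing to one, this reduces to averaging the viewpoints, i.e. refocusing at the nominal focal plane.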

    Photonic mixer incorporating all-optical microwave frequency generator based on stimulated Brillouin scattering using single laser source

    Get PDF
    © 2020 The Authors. Published by IEEE. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher’s website: https://doi.org/10.1109/ACCESS.2020.2975667
    In this paper, we report the theoretical and experimental implementation of a photonic mixer for Radio-Over-Fiber (RoF) transmission systems, which incorporates an all-optical 10.87 GHz microwave frequency signal generator based on beating a laser frequency with its first-order Stimulated Brillouin Scattering (SBS) frequency shift. A 13 GHz Radio Frequency (RF) signal is down-converted to a 2.13 GHz Intermediate Frequency (IF) signal. The proposed system configuration represents a cost-effective photonic mixer that can be deployed for up- and down-conversion around 11 GHz in RoF transmission systems. The optically generated microwave signal of 10.87 GHz has a phase noise of -109 dBc/Hz at a 15 MHz offset. The proposed photonic mixer exhibits a Spurious-Free Dynamic Range (SFDR) of 93 dB·Hz^(2/3). This RoF transmission system configuration deploys a dual parallel Gallium Arsenide (GaAs) Mach-Zehnder Modulator as a photonic mixer, and a single laser source as both a Brillouin pump and an optical carrier at the same time. To the best of our knowledge, this type of photonic mixer has not been reported in the literature. This work was supported in part by Leonardo–Electronics, Defense and Security Systems, Grant RF Broadband Project, under Grant RES-15287.
    Published version
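    The frequency plan follows from ideal mixer arithmetic: mixing the 13 GHz RF input with the 10.87 GHz optically generated local oscillator yields the |RF − LO| difference frequency, 2.13 GHz. A trivial sketch (function name is ours) to make the arithmetic explicit:

```python
def mixer_if(rf_ghz: float, lo_ghz: float) -> float:
    """Ideal down-converting mixer output: the difference frequency |RF - LO|."""
    return abs(rf_ghz - lo_ghz)
```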

    PlenoptiCam v1.0: A light-field imaging framework

    Get PDF
    This is an accepted manuscript of an article published by IEEE in IEEE Transactions on Image Processing on 19/07/2021. Available online: https://doi.org/10.1109/TIP.2021.3095671 The accepted version of the publication may differ from the final published version.
    Light-field cameras play a vital role in rich 3-D information retrieval for narrow-range depth sensing applications. The key obstacle in composing light fields from exposures taken by a plenoptic camera is to computationally calibrate, re-align and rearrange four-dimensional image data. Several attempts have been proposed to enhance the overall image quality by tailoring pipelines dedicated to particular plenoptic cameras and improving the color consistency across viewpoints, at the expense of high computational loads. The framework presented herein advances prior outcomes thanks to its cost-effective color equalization from parallax-invariant probability distribution transfers and a novel micro image scale-space analysis for generic camera calibration independent of the lens specifications. Our framework compensates for artifacts from the sensor and micro lens grid in an innovative way to enable superior quality in sub-aperture image extraction, computational refocusing and Scheimpflug rendering with sub-sampling capabilities. Benchmark comparisons using established image metrics suggest that our proposed pipeline outperforms state-of-the-art tool chains in the majority of cases. The algorithms described in this paper are released under an open-source license, offer cross-platform compatibility with few dependencies, and provide a graphical user interface. This makes the reproduction of results and experimentation with plenoptic camera technology convenient for peer researchers, developers, photographers, data scientists and others working in this field.

    A low computational approach for assistive esophageal adenocarcinoma and colorectal cancer detection

    Get PDF
    © Springer Nature Switzerland AG 2019. In this paper, we aim to develop a low-computational system for real-time image processing and analysis of endoscopy images for the early detection of human esophageal adenocarcinoma and colorectal cancer. Rich statistical features are used to train an improved machine-learning algorithm. Our algorithm achieves real-time classification of malignant and benign tumours with significantly improved detection precision compared to the classical HOG method, used as a reference, when implemented on the NVIDIA TX2 real-time embedded platform. Our approach can help to avoid unnecessary biopsies for patients and reduce the overdiagnosis of clinically insignificant cancers in the future.
    Published version

    Baseline and triangulation geometry in a standard plenoptic camera

    Get PDF
    In this paper, we demonstrate light field triangulation to determine depth distances and baselines in a plenoptic camera. The advancement of micro lenses and image sensors has enabled plenoptic cameras to capture a scene from different viewpoints with sufficient spatial resolution. While object distances can be inferred from disparities in a stereo viewpoint pair using triangulation, this concept remains ambiguous when applied to plenoptic cameras. We present a geometrical light field model allowing triangulation to be applied to a plenoptic camera in order to predict object distances or to specify baselines as desired. It is shown that distance estimates from our novel method match those of real objects placed in front of the camera. Additional benchmark tests with an optical design software further validate the model’s accuracy, with deviations of less than 0.33% for several main lens types and focus settings. A variety of applications in the automotive and robotics fields can benefit from this estimation model.
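    The stereo relation the paper generalizes to virtual viewpoints inside a plenoptic camera is the classic triangulation formula Z = f·B/d. A minimal sketch of that baseline relation (names and units are our assumptions, not the paper's notation):

```python
def triangulate_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair: focal length f in
    pixels, baseline B in metres, disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

    For example, a 10-pixel disparity with f = 1000 px and B = 0.1 m places the object at 10 m; the paper's contribution is deriving the effective f and B for the virtual viewpoints of a plenoptic camera.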

    Light field geometry of a standard plenoptic camera

    Get PDF
    The Standard Plenoptic Camera (SPC) is an innovation in photography, allowing two-dimensional images focused at different depths to be acquired from a single exposure. Contrary to conventional cameras, the SPC consists of a micro lens array and a main lens projecting virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of virtual lenses corresponding to an equivalent camera array are derived. On the basis of paraxial approximation, a ray tracing model employing linear equations has been developed and implemented using Matlab. The optics simulation tool Zemax is utilized for validation purposes. By designing a realistic SPC, experiments demonstrate that a predicted image refocusing distance at 3.5 m deviates by less than 11% from the simulation in Zemax, whereas baseline estimations indicate no significant difference. Applying the proposed methodology will enable an alternative to traditional depth map acquisition by disparity analysis.
    European Commission

    Digital Refocusing: All-in-Focus Image Rendering Based on Holoscopic 3D Camera

    Get PDF
    This paper presents an innovative method for digitally refocusing at different points in space after capture, as well as extracting an all-in-focus image. The proposed method extracts the all-in-focus image using the Michelson contrast formula, which also helps in calculating the coordinates of the 3D object locations. With the light field integral camera setup, the objects in the captured scene are precisely positioned at measurable distances from the camera; this aids the refocusing process in recovering the original location where each object is in focus, while elsewhere it appears blurred with lower contrast. The highest contrast values at different points in space therefore identify the focused points where the objects are positioned, from which the all-in-focus image can be obtained. Detailed experiments are conducted to demonstrate the credibility of the proposed method.
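    The Michelson contrast used as the focus measure is (Imax − Imin)/(Imax + Imin) over an image patch; the refocus plane maximizing it for a patch is taken as the plane where that object is sharp. A minimal sketch (function name and patch handling are our assumptions):

```python
import numpy as np

def michelson_contrast(patch) -> float:
    """Michelson contrast (Imax - Imin) / (Imax + Imin) of an image patch.

    High contrast indicates the patch is in focus at the current refocus
    plane; blurred patches have compressed intensity range and lower contrast.
    """
    arr = np.asarray(patch, dtype=float)
    i_max, i_min = arr.max(), arr.min()
    denom = i_max + i_min
    return (i_max - i_min) / denom if denom else 0.0
```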

    Developing a user-centered accessible virtual reality video environment for severe visual disabilities

    Get PDF
    We address a timely issue of accessibility to visual information through the medium of video. Using emerging technologies (head-mounted virtual reality displays) and a user-centred design approach, we provide people with severe visual disabilities with a bespoke platform for accessing and viewing videos. We report on newly created test methods for measuring acuity within virtual spaces, and on the reactions of visually impaired individuals, which informed our platform's design; these methods can inform similar designs and allow testing and refinement for ecological and external validity. A prototype software application for accessible virtual reality video viewing is presented, with a subsequent user evaluation to test the software on a newer virtual reality head-mounted display, determining usability while measuring how visually impaired users utilize elements in a virtual environment. We give guidance based on empirical evidence, and advocate that although VR technologies are currently targeted primarily at a generic audience (gaming and entertainment), they can and should be further developed as assistive tools that enable independent living and increase the quality of life for those with disabilities, and specifically severe visual impairments.

    Creating a bespoke virtual reality personal library space for persons with severe visual disabilities

    Get PDF
    This is an accepted manuscript of an article published by ACM in JCDL '20: Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020, available online: https://doi.org/10.1145/3383583.3398610 The accepted version of the publication may differ from the final published version.
    We present our work on creating a virtual reality personal library environment to enable people with severe visual disabilities to engage in reading tasks. The environment acts as a personal study or library for an individual who, under other circumstances, would not be able to access or use a public library or a physical study at home. We present the tests undertaken to identify the requirements and needs of our users to inform this environment, and finally present the working prototype.
    Published version